
    Agent Behaviour Simulator (ABS): a platform for urban behaviour development

    Computer graphics have become important for many applications, and the quality of the produced images has greatly improved. One of the interesting remaining problems is the representation of dense dynamic environments such as populated cities. Although we have recently seen some successful work on rendering such environments, the real-time simulation of virtual cities populated by thousands of intelligent animated agents is still very challenging. In this paper we describe a platform that aims to accelerate the development of agent behaviours. The platform makes it easy to enter local rules and callbacks which govern the individual behaviours. It automatically performs routine tasks such as collision detection, allowing the user to concentrate on defining the more involved tasks. The platform is based on a 2D grid with a four-layered structure. The first two layers are used to compute collision detection against the environment and other agents, and the last two are used for more complex behaviours. A set of visualisation tools is incorporated that allows testing of the real-time simulation. The choices made for the visualisation allow the user to better understand the way agents move inside the world and how they take decisions, so that the user can evaluate whether the system simulates the expected behaviour. Experimentation with the system has shown that behaviours in environments with thousands of agents can be developed and visualised effortlessly.
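    The abstract outlines the data structure but gives no code; the following Python sketch is a hypothetical reading of the four-layer 2D grid, with the first two layers resolving environment and agent collisions and the last two left free for user-defined behaviour data (all names and the layer encoding are assumptions, not the platform's API):

        import numpy as np

        class BehaviourGrid:
            """Hypothetical four-layer grid: layer 0 holds static obstacles,
            layer 1 agent occupancy, layers 2-3 user-defined behaviour data."""
            def __init__(self, width, height):
                self.layers = np.zeros((4, height, width), dtype=np.int32)

            def can_move_to(self, x, y):
                # A cell is walkable when neither the environment layer
                # nor the agent-occupancy layer is set.
                return self.layers[0, y, x] == 0 and self.layers[1, y, x] == 0

            def move_agent(self, agent_id, old, new):
                # Routine collision handling done by the platform, so user
                # callbacks only deal with the more involved behaviour.
                if not self.can_move_to(*new):
                    return False  # blocked: a local rule decides what happens next
                self.layers[1, old[1], old[0]] = 0
                self.layers[1, new[1], new[0]] = agent_id
                return True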

    Real-time shadows for animated crowds in virtual cities


    Automatic generation of consistent shadows for Augmented Reality

    In the context of mixed reality, it is difficult to simulate shadow interaction between real and virtual objects when only an approximate geometry of the real scene and the light source is known. In this paper, we present a real-time rendering solution to simulate colour-consistent virtual shadows in a real scene. The rendering consists of a three-step mechanism: shadow detection, shadow protection and shadow generation. In the shadow detection step, the shadows due to real objects are automatically identified using the texture information and an initial estimate of the shadow region. In the next step, a protection mask is created to prevent further rendering in those shadow regions. Finally, the virtual shadows are generated using shadow volumes and a pre-defined scaling factor that adapts the intensity of the virtual shadows to the real shadows. The procedure detects and generates shadows in real time, consistent with those already present in the scene, and offers an automatic, real-time solution for common illumination suitable for augmented reality.
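    As a rough illustration of the protection and generation steps (not the paper's GPU shadow-volume renderer), this Python sketch darkens the pixels covered by a virtual shadow while protecting regions already detected as real shadow; the function name, mask inputs and scaling value are assumptions:

        import numpy as np

        def composite_virtual_shadows(image, real_shadow_mask, virtual_shadow_mask,
                                      scale=0.6):
            # Protection mask: pixels already in real shadow must not be
            # darkened a second time.
            protect = real_shadow_mask.astype(bool)
            apply_shadow = virtual_shadow_mask.astype(bool) & ~protect
            out = image.astype(np.float32)
            # Generation: the scaling factor adapts the virtual shadow
            # intensity to the real shadows.
            out[apply_shadow] *= scale
            return out.clip(0, 255).astype(np.uint8)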

    Coupling dense point cloud correspondence and template model fitting for 3D human pose and shape reconstruction from a single depth image

    In this paper, we address the problem of capturing both the shape and the pose of a character using a single depth sensor. Some previous works proposed to fit a parametric generic human template to the depth image, while others developed deep learning (DL) approaches to find the correspondence between depth pixels and vertices of the template. In this paper, we explore the possibility of combining these two approaches to benefit from their respective advantages. The hypothesis is that DL dense correspondence should provide more accurate information to template model fitting than previous approaches, which only use estimated joint positions. Thus, we stacked a state-of-the-art DL dense correspondence method (namely double U-Net) and parametric model fitting (namely SMPLify-X). Experiments on the SURREAL [1] and DFAUST [2] datasets and a subset of AMASS [3] show that this hybrid approach enhances pose and shape estimation compared to using DL or model fitting separately. This result opens new perspectives for pose and shape estimation in many applications where complex or invasive motion capture setups are impossible, such as sports, dance, and ergonomic assessment.
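    To make the hybrid pipeline concrete, here is a minimal self-contained Python sketch of the fitting step: a toy linear shape model (standing in for SMPL-X) is fitted to depth points whose matching template vertices are assumed to come from the dense-correspondence network; every name and dimension below is invented for illustration:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        BASE = rng.normal(size=(100, 3))           # toy template vertices
        SHAPE_DIRS = rng.normal(size=(100, 3, 5))  # linear shape blendshapes

        def template_vertices(betas):
            # Toy parametric template: base mesh plus linear shape deformation.
            return BASE + SHAPE_DIRS @ betas

        # Dense correspondence output: each depth point is matched to a
        # template vertex (here simulated from a known ground-truth shape).
        target_betas = rng.normal(size=5)
        corr_idx = rng.integers(0, 100, size=60)
        depth_points = template_vertices(target_betas)[corr_idx]

        def fitting_energy(betas):
            # Data term enabled by dense correspondence: squared distance
            # between matched template vertices and observed depth points.
            diff = template_vertices(betas)[corr_idx] - depth_points
            return np.sum(diff ** 2)

        res = minimize(fitting_energy, np.zeros(5))
        print("shape parameter error:", np.linalg.norm(res.x - target_betas))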

    Ré-éclairage et remodélisation interactifs des scènes réelles pour la réalité augmentée (Interactive relighting and remodelling of real scenes for augmented reality)

    Computer-augmented reality is a rapidly emerging field that allows users to mix virtual and real worlds. In this thesis, we concentrate on interactive relighting, i.e. virtually modifying lighting properties, and on remodelling of real interior scenes, with realistic mixed (real and virtual) lighting effects. We have developed two new methods to achieve these objectives, both of which use non-exhaustive input data to estimate the radiometric properties of real scenes. In the first approach, we use a simple textured model of the real scene. We first remove the original lighting effects (shadows, etc.) from the textures using a new algorithm based on the radiosity equations. During interactive modification, which achieves an update rate of three frames per second, lighting effects are simulated by incremental algorithms and displayed using the graphics hardware. We developed a second method for which the real scene is known from photographs taken under several different but controlled lighting conditions. Diffuse reflectance is computed pixel by pixel from the input images. New lighting conditions are then simulated using ray tracing for direct illumination and optimised hierarchical radiosity for indirect illumination; in addition to relighting, this second method allows interactive remodelling. Finally, we propose a new photometric calibration method for non-professional cameras, and we present algorithms that improve the quality of the reflectance estimate for both methods.
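    The second method's per-pixel reflectance estimation can be read as a least-squares fit of a Lambertian model I = R * S across the photographs taken under controlled lighting; the following Python function is our illustrative sketch of that idea, not the thesis' exact estimator:

        import numpy as np

        def estimate_diffuse_reflectance(images, shadings):
            """images:   (k, h, w) intensities observed under k known lightings
            shadings: (k, h, w) irradiance predicted from those known lights
            Per-pixel least-squares solution of I = R * S, assuming a purely
            diffuse (Lambertian) surface."""
            num = np.sum(images * shadings, axis=0)
            den = np.sum(shadings ** 2, axis=0) + 1e-8  # avoid division by zero
            return num / den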

    Analysis of Haptics Evolution from Web Search Engines' Data

    This article proposes using search engine results data, such as the number of results containing relevant terms, to measure the evolution of Haptics, the field devoted to the science and technology of the sense of touch. Haptics is a complex discipline at the intersection of several specialised fields, such as robotics, computer science, psychology, and mathematics. It can also appear as a new and emergent discipline, because many promising haptic interfaces, which allow innovative multimodal applications in many fields, have become mature only recently. The study presented in this article uses data collected at different periods of time (in December 1999, January 2004, January 2005, November 2006 and April 2007) on Web search engines from requests on three different terms: haptique, haptik and haptics, taken respectively from the French, German, and English languages. The evolution of Haptics is seemingly reflected by the online frequency of these specific terms over time. This evolution has been measured by considering the Internet community through search engines such as Google and Yahoo!.
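    The underlying computation is simple tabulation; as a sketch (with placeholder counts, not the article's measurements) of how per-term result counts sampled at two dates give growth ratios:

        # Placeholder counts for illustration only; the article's data were
        # collected from search engines between December 1999 and April 2007.
        counts = {
            "haptics":  {"1999-12": 10_000, "2007-04": 1_000_000},
            "haptique": {"1999-12": 500,    "2007-04": 20_000},
            "haptik":   {"1999-12": 800,    "2007-04": 30_000},
        }

        for term, series in counts.items():
            growth = series["2007-04"] / series["1999-12"]
            print(f"{term}: x{growth:.0f} between the two samples")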

    Real-Time Reflection on Moving Vehicles in Urban Environments

    In the context of virtual reality, the simulation of complex environments with many animated objects is becoming more and more common. Virtual reality applications have always promoted the development of new efficient algorithms and image-based rendering techniques for real-time interaction. In this paper, we propose a technique which allows the real-time simulation, in a city, of the reflections of static geometry (e.g. buildings) on specular dynamic objects (vehicles). For this, we introduce the idea of multiple environment maps. We pre-compute a set of reference environment maps at strategic positions in the scene, which are used at run time, for each visible dynamic object, to compute local environment maps by resampling images. To efficiently manage a small number of reference environment maps relative to the scene dimensions, we perform, for each vertex of the reconstructed environment, a ray trace in a heightfield representation of the scene. We control the frame rate by adaptive reconstruction of the environment maps. We have implemented this approach, and the results show that it is efficient and scales to many dynamic objects while maintaining interactive frame rates.
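    The per-vertex ray cast into the heightfield can be sketched as a simple ray march over a 2D grid of building heights; this Python version is a deliberately simplified illustration of the technique (function name, step size and conventions are ours):

        import numpy as np

        def trace_heightfield(height, origin, direction, step=0.5, max_dist=200.0):
            """March a ray through a 2.5D heightfield (z up) and return the
            hit point used to resample a reference environment map, or None
            when the ray escapes the scene (sky)."""
            d = np.asarray(direction, dtype=float)
            d /= np.linalg.norm(d)
            origin = np.asarray(origin, dtype=float)
            t = 0.0
            while t < max_dist:
                x, y, z = origin + t * d
                ix, iy = int(x), int(y)
                if not (0 <= ix < height.shape[1] and 0 <= iy < height.shape[0]):
                    return None            # left the grid: sample the sky instead
                if z <= height[iy, ix]:
                    return origin + t * d  # hit a building at this point
                t += step
            return None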